
Collaborating Authors: Taylor


"Why Are There No F-cking Jobs?" There's More Than Trump to the Vexing Employment Market.

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. In 2021, Zia graduated from the University of Michigan–Dearborn with a degree in software engineering. With an internship under his belt, he had no shortage of job opportunities, and he landed a contract coding gig in January of 2022. It was good work, for a year and a half, until he got laid off in mid-2023. After taking a month to figure out what he wanted to specialize in, Zia decided that he'd go for the types of app- and site-building jobs that had been so plentiful when he was in school.


Generative Zoo

Niewiadomski, Tomasz, Yiannakidis, Anastasios, Cuevas-Velasquez, Hanz, Sanyal, Soubhik, Black, Michael J., Zuffi, Silvia, Kulits, Peter

arXiv.org Artificial Intelligence

The model-based estimation of 3D animal pose and shape from images enables computational modeling of animal behavior. Training models for this purpose requires large amounts of labeled image data with precise pose and shape annotations. However, capturing such data requires the use of multi-view or marker-based motion-capture systems, which are impractical to adapt to wild animals in situ and impossible to scale across a comprehensive set of animal species. Some have attempted to address the challenge of procuring training data by pseudo-labeling individual real-world images through manual 2D annotation, followed by 3D-parameter optimization to those labels. While this approach may produce silhouette-aligned samples, the obtained pose and shape parameters are often implausible due to the ill-posed nature of the monocular fitting problem. Sidestepping real-world ambiguity, others have designed complex synthetic-data-generation pipelines leveraging video-game engines and collections of artist-designed 3D assets. Such engines yield perfect ground-truth annotations but are often lacking in visual realism and require considerable manual effort to adapt to new species or environments. Motivated by these shortcomings, we propose an alternative approach to synthetic-data generation: rendering with a conditional image-generation model. We introduce a pipeline that samples a diverse set of poses and shapes for a variety of mammalian quadrupeds and generates realistic images with corresponding ground-truth pose and shape parameters. To demonstrate the scalability of our approach, we introduce GenZoo, a synthetic dataset containing one million images of distinct subjects. We train a 3D pose and shape regressor on GenZoo, which achieves state-of-the-art performance on a real-world animal pose and shape estimation benchmark, despite being trained solely on synthetic data. https://genzoo.is.tue.mpg.de
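The pipeline described above can be caricatured in a few lines: sample shape and pose parameters from priors, hand them to a conditional image generator, and keep the sampled parameters as exact ground-truth labels. The sketch below is hypothetical, not the authors' code: the parameter dimensions, the priors, and the `generate_image` stub are all stand-ins (the real pipeline conditions an image-generation model on a rendered quadruped).

```python
import random

NUM_SHAPE_PARAMS = 10   # assumed shape-space dimensionality, for illustration
NUM_JOINTS = 35         # assumed quadruped joint count, for illustration

def sample_shape(rng):
    # Draw shape coefficients from a unit Gaussian prior (an assumption).
    return [rng.gauss(0.0, 1.0) for _ in range(NUM_SHAPE_PARAMS)]

def sample_pose(rng):
    # Draw per-joint axis-angle rotations (radians) from a broad prior.
    return [[rng.uniform(-0.5, 0.5) for _ in range(3)]
            for _ in range(NUM_JOINTS)]

def generate_image(shape, pose):
    # Placeholder for the conditional image-generation model: in the real
    # pipeline this step produces a realistic image conditioned on the
    # sampled parameters. Here we just return a record.
    return {"conditioned_on": {"shape": shape, "pose": pose}}

def build_dataset(n, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        shape, pose = sample_shape(rng), sample_pose(rng)
        # The sampled parameters double as exact ground-truth annotations,
        # which is the key advantage over pseudo-labeling real images.
        samples.append({"image": generate_image(shape, pose),
                        "label": {"shape": shape, "pose": pose}})
    return samples

dataset = build_dataset(4)
```

The point of the sketch is the data flow: because labels are sampled first and images are generated from them, annotation quality does not depend on solving the ill-posed monocular fitting problem.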


A PAC-Bayesian Perspective on Structured Prediction with Implicit Loss Embeddings

Cantelobre, Théophile, Guedj, Benjamin, Pérez-Ortiz, María, Shawe-Taylor, John

arXiv.org Machine Learning

Many practical machine learning tasks can be framed as structured prediction problems, where several output variables are predicted and considered interdependent. Recent theoretical advances in structured prediction have focused on obtaining fast convergence-rate guarantees, especially in the Implicit Loss Embedding (ILE) framework. PAC-Bayes has gained interest recently for its capacity to produce tight risk bounds for distributions over predictors. This work proposes a novel PAC-Bayesian perspective on the ILE structured prediction framework. We present two generalization bounds, on the risk and excess risk, which yield insights into the behavior of ILE predictors. Two learning algorithms are derived from these bounds.
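For readers unfamiliar with the bound type the abstract refers to, a classical PAC-Bayesian bound (McAllester's, stated here as background; the paper's ILE-specific bounds take a different form) reads:

```latex
% For a prior $\pi$ over predictors fixed before seeing the i.i.d.\ sample
% $S$ of size $n$, with probability at least $1-\delta$, simultaneously
% for all posteriors $\rho$:
\mathbb{E}_{h \sim \rho}\, R(h)
  \;\le\;
\mathbb{E}_{h \sim \rho}\, \widehat{R}_S(h)
  \;+\;
\sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

where $R$ is the true risk and $\widehat{R}_S$ the empirical risk. The KL term is what makes such bounds tight for well-chosen posteriors, which is the appeal the abstract alludes to.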


A novel transfer learning method based on common space mapping and weighted domain matching

Liang, Ru-Ze, Xie, Wei, Li, Weizhi, Wang, Hongqi, Wang, Jim Jing-Yan, Taylor, Lisa

arXiv.org Machine Learning

In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of two domains to one single common space, and learn a classifier in this common space. Then we adapt the common classifier to the two domains by adding an adaptive function to it for each. In the common space, the source domain data points are weighted and matched to the target domain in terms of distributions. The weighting terms of source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.
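The core idea, a shared map into a common space plus reweighting of source points to match the target distribution, can be sketched with toy values. Everything below is hypothetical: the shared map `W`, the data, and the use of mean discrepancy as a crude proxy for full distribution matching are illustrative assumptions, not the paper's actual formulation.

```python
def project(W, x):
    # Apply the shared linear map into the common space.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def weighted_mean(points, weights):
    total = sum(weights)
    dim = len(points[0])
    return [sum(w * p[d] for w, p in zip(weights, points)) / total
            for d in range(dim)]

# Shared 2x3 map into a 2-D common space (toy values, not learned here).
W = [[1.0, 0.0, 0.5],
     [0.0, 1.0, -0.5]]

source = [[1.0, 2.0, 0.0], [3.0, 0.0, 0.0]]
target = [[2.0, 1.0, 1.0]]

src_common = [project(W, x) for x in source]
tgt_common = [project(W, x) for x in target]
tgt_mean = weighted_mean(tgt_common, [1.0] * len(tgt_common))

def discrepancy(weights):
    # Distance between the weighted source mean and the target mean in
    # the common space: a crude stand-in for distribution matching.
    m = weighted_mean(src_common, weights)
    return sum((a - b) ** 2 for a, b in zip(m, tgt_mean)) ** 0.5

uniform = discrepancy([1.0, 1.0])
# Upweighting the source point that better matches the target reduces
# the discrepancy relative to uniform weights.
reweighted = discrepancy([1.0, 3.0])
```

In the actual framework the map, the weights, and the classifier are learned jointly under additional regularization; the sketch only shows why reweighting source points in the common space can align the two distributions.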